
    A Case for Peering of Content Delivery Networks

    Content Delivery Networks (CDNs) have proliferated, yet existing content networks are owned and operated by individual companies. As a consequence, closed delivery networks have evolved that do not cooperate with other CDNs; in practice, islands of CDNs are formed. Moreover, the logical separation between content and services in this context results in two content networking domains. However, present trends in content networks and content networking capabilities are generating interest in interconnecting content networks. Finding ways for distinct content networks to coordinate and cooperate with one another is necessary for better overall service. In addition, meeting users' QoS requirements according to the Service Level Agreements negotiated between the user and the content network is a pressing issue in this setting. In this article, we present an open, scalable, Service-Oriented Architecture based system to assist the creation of open Content and Service Delivery Networks (CSDNs) that scale and support sharing of resources with other CSDNs.

    Matching independent global constraints for composite web services

    Service discovery employs matching techniques to select services by comparing their descriptions against user constraints. Semantic-based matching approaches achieve higher recall than syntactic-based ones, as they employ ontological reasoning mechanisms to match syntactically heterogeneous descriptions. However, semantic-based approaches still have problems, e.g., a lack of scalability, since an exhaustive search is often performed to locate services conforming to constraints. This paper proposes two approaches that address the scalability and performance of composite service location. First, services are indexed based on the values they assign to their restricted attributes (the attributes restricted by a given constraint). Then, services that assign “conforming values” to those attributes are combined to form composite services. Since identifying such values is NP-hard, the first proposed approach extends a local optimisation technique to perform this combination step. However, this approach returns false negatives, because the local optimisation technique does not consider all the values. Hence, a second approach that derives conforming values using domain rules is defined. The rules used are returned with each composite service so that a user can understand the context in which it was retrieved. Experiments varying the number of available services demonstrate that the local optimisation-based approach performs 76% better than existing semantic-based approaches, with recall 98% higher than syntactic-based approaches.
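
    To make the indexing-and-combination step concrete, the sketch below builds composite services from an index keyed on the attribute a global constraint restricts (here, a total-price budget). It enumerates combinations exhaustively for clarity, which is exactly the exponential search the paper's local-optimisation and domain-rule approaches are designed to avoid; the service names, prices, and budget are invented for illustration.

```python
from itertools import product

# Hypothetical service index: each category maps to candidate services
# together with the value each assigns to the restricted attribute "price".
services = {
    "flight": [("F1", 300), ("F2", 450)],
    "hotel":  [("H1", 120), ("H2", 200)],
    "car":    [("C1", 60),  ("C2", 90)],
}

def conforming_composites(budget):
    """Enumerate composites satisfying the global constraint
    sum(price) <= budget (brute force, for illustration only)."""
    composites = []
    for combo in product(*services.values()):
        total = sum(price for _, price in combo)
        if total <= budget:
            composites.append(([name for name, _ in combo], total))
    return composites

print(conforming_composites(500))  # -> [(['F1', 'H1', 'C1'], 480)]
```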

    RPDP: An Efficient Data Placement based on Residual Performance for P2P Storage Systems

    Storage systems using Peer-to-Peer (P2P) architecture are an alternative to traditional client-server systems. They offer better scalability and fault tolerance while eliminating the single point of failure. The nature of P2P storage systems, which consist of heterogeneous nodes, however introduces data placement challenges that create implementation trade-offs (e.g., between performance and scalability). The existing Kademlia-based DHT data placement method stores data at the closest node, where distance is measured by the bit-wise XOR operation between the data identifier and a given node's identifier. This approach is highly scalable because it requires no global knowledge for placing or retrieving data. It does not, however, consider the heterogeneous performance of the nodes, which can result in imbalanced resource usage affecting the overall latency of the system. Other works implement criteria-based selection that addresses node heterogeneity, but they often cause subsequent data retrieval to require global knowledge of where the data is stored. This paper introduces Residual Performance-based Data Placement (RPDP), a novel data placement method based on the dynamic temporal residual performance of data nodes. RPDP places data at the most appropriate nodes, selected by their throughput and latency, with the aim of achieving lower overall latency by balancing data distribution with respect to the performance of individual nodes. RPDP relies on a Kademlia-based DHT with a modified data structure that allows data to be retrieved subsequently without the need for global knowledge. The experimental results indicate that RPDP reduces the overall latency of the baseline Kademlia-based P2P storage system (by 4.87%) and also reduces the variance of latency among the nodes, with minimal impact on data retrieval complexity.
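
    A minimal sketch of the two placement ideas the abstract contrasts: the Kademlia baseline picks the nodes whose IDs are XOR-closest to the data ID, while a residual-performance-style variant re-ranks nearby candidates by throughput and latency. The scoring formula, the alpha weight, and the node records below are assumptions for illustration, not RPDP's exact metric.

```python
import heapq

def xor_distance(a: int, b: int) -> int:
    """Kademlia distance between two IDs: bit-wise XOR."""
    return a ^ b

def closest_nodes(data_id, nodes, k=3):
    """Baseline Kademlia placement: the k nodes XOR-closest to the data ID."""
    return heapq.nsmallest(k, nodes, key=lambda n: xor_distance(data_id, n["id"]))

def rpdp_candidates(data_id, nodes, k=3, alpha=0.5):
    """Residual-performance-inspired placement: take a pool of nearby
    nodes, then rank them by a score rewarding throughput and
    penalising latency (hypothetical formula)."""
    candidates = closest_nodes(data_id, nodes, k=2 * k)
    score = lambda n: alpha * n["throughput"] - (1 - alpha) * n["latency"]
    return sorted(candidates, key=score, reverse=True)[:k]

nodes = [
    {"id": 0b1011, "throughput": 80.0, "latency": 12.0},
    {"id": 0b0110, "throughput": 20.0, "latency": 45.0},
    {"id": 0b1110, "throughput": 55.0, "latency": 20.0},
]
print(rpdp_candidates(0b1010, nodes, k=2))  # fast, nearby nodes first
```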

    Context-aware Cardiac Monitoring for Early Detection of Heart Diseases

    The aim of this paper is to propose a scalable context-aware framework for early detection of several cardiovascular diseases through continuous monitoring using smart sensors and the strength of cloud computing. By constantly sampling the ECG signal, vital signs, and activities, our system detects possible symptoms of heart disease and alerts the user by delivering a context-aware service with flexible output modalities. A non-context-aware system that makes decisions based only on an abnormal ECG signal can generate false alerts at a high rate. Our proposed solution aims to reduce that rate by bringing different contexts into the decision-making process. As a proof of concept, we developed a simulated prototype to detect the long-term health risk of Premature Atrial Contraction (PAC), a common form of cardiac arrhythmia. The system can classify ECG signals as PAC using appropriate feature selection and a learning algorithm. By tracking the context history and personal profile stored in the cloud database, our system detects the user's smoking habits, alcohol consumption, and caffeine intake. It can also detect conditions such as stress, hypertension, and anxiety using different physiological parameters of the user, and it is capable of sending situational warning notifications. Thus, this model can serve as a new mechanism for heart disease detection.
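
    The context-aware decision step can be illustrated with a toy rule: an abnormal ECG classification raises an alert only when the user's context corroborates it. The risk weights, threshold, and context keys below are invented for illustration; the paper's actual classifier and context model are more elaborate.

```python
def context_aware_alert(ecg_is_pac: bool, context: dict,
                        threshold: float = 0.7) -> bool:
    """Hypothetical decision rule: corroborate an abnormal ECG
    classification with context before alerting."""
    if not ecg_is_pac:
        return False
    risk = 0.5  # base risk from the ECG classifier alone
    risk += 0.2 if context.get("caffeine_intake") else 0.0
    risk += 0.2 if context.get("stress") else 0.0
    risk -= 0.3 if context.get("exercising") else 0.0  # benign explanation
    return risk >= threshold

# A PAC-like signal during exercise does not trigger an alert...
print(context_aware_alert(True, {"exercising": True}))                        # False
# ...but the same signal alongside caffeine intake and stress does.
print(context_aware_alert(True, {"caffeine_intake": True, "stress": True}))  # True
```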

    Spring framework in smart proxy transaction model

    This paper explores the adoption of an open-source application framework, the Spring Framework, in the Smart Proxy (sProxy) transaction model for transaction support. The Spring Framework is plugged into the sProxy transaction model to provide transactional properties, increasing transactional interoperability in the Web Services context. © 2009 IEEE
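
    The paper's implementation uses Spring's Java transaction support; as a language-neutral illustration of the underlying idea, the sketch below shows a smart proxy that demarcates a transaction around a Web Service call, the role Spring's declarative transactions (e.g., @Transactional) play in the sProxy model. All names here are hypothetical.

```python
class TransactionManager:
    """Minimal stand-in for a transaction manager such as Spring's
    PlatformTransactionManager."""
    def begin(self):    print("BEGIN transaction")
    def commit(self):   print("COMMIT")
    def rollback(self): print("ROLLBACK")

def transactional(tx: TransactionManager):
    """Decorator mimicking declarative transaction demarcation."""
    def wrap(fn):
        def proxy(*args, **kwargs):
            tx.begin()
            try:
                result = fn(*args, **kwargs)
                tx.commit()
                return result
            except Exception:
                tx.rollback()
                raise
        return proxy
    return wrap

tx = TransactionManager()

@transactional(tx)
def invoke_web_service(payload):
    # the smart proxy would forward the call to the remote Web Service here
    return f"processed {payload}"

print(invoke_web_service("order-42"))
```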

    Optimizing the Transition Waste in Coded Elastic Computing

    Distributed computing, in which a resource-intensive task is divided into subtasks and distributed among different machines, plays a key role in solving large-scale problems, e.g., machine learning on large datasets or massive computational problems arising in genomic research. Coded computing is a recently emerging paradigm in which redundancy is introduced into distributed computing to alleviate the impact of slow machines, or stragglers, on the completion time. Motivated by recently available services in the cloud computing industry, e.g., EC2 Spot or Azure Batch, where spare/low-priority virtual machines are offered at a fraction of the price of on-demand instances but can be preempted at short notice, we investigate coded computing solutions over elastic resources, where the set of available machines may change in the middle of the computation. Our contributions are two-fold. We first propose an efficient method to minimize the transition waste, a newly introduced concept quantifying the total number of tasks that existing machines have to abandon or take on anew when a machine joins or leaves, for the cyclic elastic task allocation scheme recently proposed in the literature (Yang et al., ISIT'19). We then generalize this scheme and introduce new task allocation schemes based on finite geometry that achieve zero transition waste as long as the number of active machines varies within a fixed range. The proposed solutions can be applied on top of any existing coded computing scheme that tolerates stragglers.
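
    To make the transition-waste notion concrete, the sketch below counts, for a naive round-robin reallocation, how many tasks the surviving machines must abandon or take on anew when one machine is preempted. This naive scheme is deliberately wasteful; the paper's cyclic and finite-geometry schemes are designed to drive this count down, to zero within a fixed range of machine counts.

```python
def cyclic_allocation(task_count, machines):
    """Assign tasks round-robin over the active machines (a simplified
    stand-in for the cyclic scheme of Yang et al.)."""
    alloc = {m: set() for m in machines}
    for t in range(task_count):
        alloc[machines[t % len(machines)]].add(t)
    return alloc

def transition_waste(before, after):
    """Total number of tasks that machines present in both allocations
    must abandon or take on anew after the elastic event."""
    waste = 0
    for m in before:
        if m in after:
            waste += len(before[m] ^ after[m])  # symmetric difference
    return waste

machines = ["m0", "m1", "m2", "m3"]
before = cyclic_allocation(12, machines)
after = cyclic_allocation(12, [m for m in machines if m != "m3"])  # m3 preempted
print(transition_waste(before, after))  # 15: nearly every task moves
```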

    Machine Learning to Ensure Data Integrity in Power System Topological Network Database

    Operational and planning modules of energy systems depend heavily on information about the underlying topological and electric parameters, which is often kept in a database within the operation centre. These operational and planning modules are therefore vulnerable to cyber anomalies arising from accidental or deliberate changes to the power system database model. To validate this, we demonstrate the impact of cyber-anomalies on the database model used for the operation of energy systems. To counter these anomalies, we propose a defence mechanism based on widely accepted classification techniques that identifies the abnormal class of anomalies. In this study, we find that our proposed method based on the multilayer perceptron (MLP), a special class of feedforward artificial neural network (ANN), outperforms other existing techniques. The proposed method is validated using the IEEE 33-bus and 24-bus reliability test systems and analysed on ten different datasets to show its effectiveness in securing the Optimal Power Flow (OPF) module against data integrity anomalies. This paper highlights that the proposed machine learning-based anomaly detection technique successfully identifies energy database manipulation at a high detection rate while raising only a few false alarms.
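
    A minimal sketch of the classification step: an MLP trained to separate clean database rows from manipulated ones. The synthetic features stand in for topological and electric parameters; the data, network shape, and tampering model are all assumptions, not the paper's datasets or configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for database rows of topological/electric parameters:
# normal rows cluster around nominal values; tampered rows each carry one
# manipulated entry.
normal = rng.normal(1.0, 0.05, size=(500, 8))
tampered = rng.normal(1.0, 0.05, size=(500, 8))
cols = rng.integers(0, 8, size=500)
tampered[np.arange(500), cols] += rng.choice([-0.5, 0.5], size=500)

X = np.vstack([normal, tampered])
y = np.array([0] * 500 + [1] * 500)  # 0 = clean, 1 = manipulated

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"detection accuracy: {clf.score(X_te, y_te):.2f}")
```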